If needed, install dependencies like so:¶

In [ ]:
!pip install pandoc jupyterlab matplotlib numpy scipy notebook nbconvert mne

This cell is included to demonstrate why some code cannot run inside a Jupyter notebook. In some cases, such as when we want interactive graphs, we must run the Python script the traditional way (e.g. `python script.py`) for it to work, because the execution environment (plotting backends, variables like `__file__`) differs between a notebook kernel and the plain Python interpreter.¶
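One way to see this difference is to detect the execution context at runtime. The helper below is a sketch (not part of the original notebook) using the common `get_ipython()` heuristic; the function name `in_notebook` is ours:

```python
def in_notebook():
    """Heuristic: are we running inside a Jupyter kernel?"""
    try:
        from IPython import get_ipython  # available whenever IPython/Jupyter is installed
    except ImportError:
        return False
    shell = get_ipython()
    # ZMQInteractiveShell -> Jupyter kernel; TerminalInteractiveShell -> plain IPython
    return shell is not None and type(shell).__name__ == "ZMQInteractiveShell"

print("in notebook:", in_notebook())
```

Run from a terminal this prints `False`; inside a notebook kernel it prints `True`.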

In [ ]:
import os
import glob
import mne
import matplotlib.pyplot as plt
from datetime import datetime
import shutil

def plot_psd(edf_file, output_directory):
    try:
        # Load the file with the reader matching its extension
        if edf_file.lower().endswith('.bdf'):
            raw = mne.io.read_raw_bdf(edf_file, preload=True)
        else:
            raw = mne.io.read_raw_edf(edf_file, preload=True)
        
        # Create a plot of the PSD (newer MNE versions prefer raw.compute_psd().plot())
        fig = raw.plot_psd(show=False)
        
        # Save the plot to a PNG file named after the recording
        base_name = os.path.splitext(os.path.basename(edf_file))[0]
        output_path = os.path.join(output_directory, base_name + '_psd.png')
        fig.savefig(output_path)
        plt.close(fig)  # Close the figure to free up memory
        print(f"Saved PSD plot for {edf_file} to {output_path}")
        
    except Exception as e:
        print(f"Error processing {edf_file}: {e}")

def find_edf_files(parent_directory):
    extensions = ['*.bdf', '*.edf', '*.edf+']
    edf_files = []
    for ext in extensions:
        edf_files.extend(glob.glob(os.path.join(parent_directory, '**', ext), recursive=True))
    return edf_files

def main(parent_directory, output_directory):
    edf_files = find_edf_files(parent_directory)

    for edf_file in edf_files:
        print(f"Processing file: {edf_file}")
        plot_psd(edf_file, output_directory)

if __name__ == '__main__':
    parent_directory = 'EDF+'
    output_directory = 'PSD_Plots'
    
    # Create a timestamped subfolder in the output directory
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    timestamped_output_directory = os.path.join(output_directory, timestamp)
    os.makedirs(timestamped_output_directory, exist_ok=True)

    # NOTE: __file__ is not defined inside a Jupyter notebook, so we cannot
    # archive a copy of this script next to its output here. When running as a
    # standalone script, uncomment the lines below to save a copy of the script
    # into the timestamped subfolder:
    # script_path = os.path.realpath(__file__)
    # shutil.copy(script_path, os.path.join(timestamped_output_directory, os.path.basename(script_path)))

    main(parent_directory, timestamped_output_directory)

Let's inspect the methods MNE-Python makes available by calling `dir()` on the `raw` object created earlier¶

In [ ]:
for i in dir(raw):
    if '_' in i:
        continue  # skip names containing '_' (note: this also hides public methods such as plot_psd)
    print(i)
annotations
anonymize
append
close
copy
crop
describe
export
filenames
filter
info
pick
plot
preload
proj
resample
save
times

Let's inspect events¶

In [ ]:
import mne
import numpy as np
import matplotlib.pyplot as plt

# Read your EDF file
raw = mne.io.read_raw_edf(r'EDF+\Emotion Recognition\062000\062000_2024-04-24T102551.574531_EPOCFLEX_182887_2024.04.24T10.25.51.04.00.md.edf', preload=True)
events = mne.events_from_annotations(raw)  # returns a (events array, event_id dict) tuple
for i in events:
    print(('Type:', type(i)), i)
Extracting EDF parameters from f:\EEG\DATA2OTHERFORMATS\EDF+\Emotion Recognition\062000\062000_2024-04-24T102551.574531_EPOCFLEX_182887_2024.04.24T10.25.51.04.00.md.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 118527  =      0.000 ...   925.992 secs...
Used Annotations descriptions: ['neutralVideo,neutralVideo,-1,12', 'neutralVideo,neutralVideo,-1,15', 'neutralVideo,neutralVideo,-1,18', 'neutralVideo,neutralVideo,-1,3', 'neutralVideo,neutralVideo,-1,6', 'neutralVideo,neutralVideo,-1,9', 'rating,rating,-1,11', 'rating,rating,-1,14', 'rating,rating,-1,17', 'rating,rating,-1,2', 'rating,rating,-1,5', 'rating,rating,-1,8', 'videos\\1 excited\\3 general\\ratatChaseScene.mp4,videos\\1 excited\\3 general\\ratatChaseScene.mp4,-1,1', 'videos\\2 happy\\scooby.mp4,videos\\2 happy\\scooby.mp4,-1,4', 'videos\\3 motivated\\motivationalAuthor.mp4,videos\\3 motivated\\motivationalAuthor.mp4,-1,7', 'videos\\4 relaxed\\2 nature\\waterfall.mp4,videos\\4 relaxed\\2 nature\\waterfall.mp4,-1,10', 'videos\\5 sad\\1 animals\\saddogs.mp4,videos\\5 sad\\1 animals\\saddogs.mp4,-1,13', 'videos\\6 horror\\conjuring.mp4,videos\\6 horror\\conjuring.mp4,-1,16', 'videos\\7 disgusted\\trainspotting.mp4,videos\\7 disgusted\\trainspotting.mp4,-1,19']
('Type:', <class 'numpy.ndarray'>) [[  8801      0     13]
 [ 17626      0     10]
 [ 19634      0      4]
 [ 27979      0     14]
 [ 35687      0     11]
 [ 36844      0      5]
 [ 45202      0     15]
 [ 53885      0     12]
 [ 55148      0      6]
 [ 63506      0     16]
 [ 71258      0      7]
 [ 72080      0      1]
 [ 80436      0     17]
 [ 88466      0      8]
 [ 89343      0      2]
 [ 97691      0     18]
 [105997      0      9]
 [106937      0      3]
 [115275      0     19]]
('Type:', <class 'dict'>) {'neutralVideo,neutralVideo,-1,12': 1, 'neutralVideo,neutralVideo,-1,15': 2, 'neutralVideo,neutralVideo,-1,18': 3, 'neutralVideo,neutralVideo,-1,3': 4, 'neutralVideo,neutralVideo,-1,6': 5, 'neutralVideo,neutralVideo,-1,9': 6, 'rating,rating,-1,11': 7, 'rating,rating,-1,14': 8, 'rating,rating,-1,17': 9, 'rating,rating,-1,2': 10, 'rating,rating,-1,5': 11, 'rating,rating,-1,8': 12, 'videos\\1 excited\\3 general\\ratatChaseScene.mp4,videos\\1 excited\\3 general\\ratatChaseScene.mp4,-1,1': 13, 'videos\\2 happy\\scooby.mp4,videos\\2 happy\\scooby.mp4,-1,4': 14, 'videos\\3 motivated\\motivationalAuthor.mp4,videos\\3 motivated\\motivationalAuthor.mp4,-1,7': 15, 'videos\\4 relaxed\\2 nature\\waterfall.mp4,videos\\4 relaxed\\2 nature\\waterfall.mp4,-1,10': 16, 'videos\\5 sad\\1 animals\\saddogs.mp4,videos\\5 sad\\1 animals\\saddogs.mp4,-1,13': 17, 'videos\\6 horror\\conjuring.mp4,videos\\6 horror\\conjuring.mp4,-1,16': 18, 'videos\\7 disgusted\\trainspotting.mp4,videos\\7 disgusted\\trainspotting.mp4,-1,19': 19}
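For reference, each row of the events array above is `[onset_sample, previous_value, event_code]`; dividing the onset sample by the sampling frequency gives the onset in seconds. A small sketch using the first two rows of the printed array (the 128 Hz rate is inferred from the `Reading 0 ... 118527 = 925.992 secs` line, so treat it as an assumption):

```python
import numpy as np

sfreq = 128.0  # assumed sampling rate, inferred from the log output above
events = np.array([
    [8801,  0, 13],   # first two rows of the printed events array
    [17626, 0, 10],
])
onset_seconds = events[:, 0] / sfreq  # column 0 holds onset samples
print(onset_seconds)  # onsets at roughly 68.76 s and 137.70 s
```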

Let's inspect the events again, with different, more complete output¶

In [ ]:
import mne
import numpy as np

# Read your EDF file
raw = mne.io.read_raw_edf(r'EDF+\103918\103918.edf', preload=True)
events, event_dict = mne.events_from_annotations(raw)

# Get time periods for each event
sfreq = raw.info['sfreq']  # Get the sampling frequency
time_periods = []
for event_idx, event in enumerate(events):
    onset_sample = event[0]
    # NOTE: event[2] is the integer event code, not a duration in samples, so this
    # "duration" is only a placeholder; true durations live in raw.annotations.duration
    duration_seconds = event[2] / sfreq
    time_period = (raw.times[onset_sample], raw.times[onset_sample] + duration_seconds)
    time_periods.append(time_period)

# Print events with their corresponding time periods
for event_idx, (event, time_period) in enumerate(zip(events, time_periods)):
    event_name = list(event_dict.keys())[list(event_dict.values()).index(event[2])]
    print(f"Event {event_idx + 1}: {event_name}, Time Period: {time_period[0]} - {time_period[1]} seconds")
Extracting EDF parameters from e:\WEBSITE\EDF_analysis\EDF+\103918\103918.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 143999  =      0.000 ...  1124.992 secs...
Used Annotations descriptions: ['neutralVideo,neutralVideo,-1,12', 'neutralVideo,neutralVideo,-1,15', 'neutralVideo,neutralVideo,-1,18', 'neutralVideo,neutralVideo,-1,21', 'neutralVideo,neutralVideo,-1,24', 'neutralVideo,neutralVideo,-1,3', 'neutralVideo,neutralVideo,-1,6', 'neutralVideo,neutralVideo,-1,9', 'rating,rating,-1,11', 'rating,rating,-1,14', 'rating,rating,-1,17', 'rating,rating,-1,2', 'rating,rating,-1,20', 'rating,rating,-1,23', 'rating,rating,-1,5', 'rating,rating,-1,8', 'videos\\1 excited\\1 motorsports\\kenMiles.mp4,videos\\1 excited\\1 motorsports\\kenMiles.mp4,-1,1', 'videos\\2 happy\\scooby.mp4,videos\\2 happy\\scooby.mp4,-1,4', 'videos\\3 motivated\\motivationalAuthor.mp4,videos\\3 motivated\\motivationalAuthor.mp4,-1,7', 'videos\\4 relaxed\\2 nature\\waterfall.mp4,videos\\4 relaxed\\2 nature\\waterfall.mp4,-1,10', 'videos\\5 sad\\1 animals\\saddogs.mp4,videos\\5 sad\\1 animals\\saddogs.mp4,-1,13', 'videos\\6 horror\\conjuring.mp4,videos\\6 horror\\conjuring.mp4,-1,16', 'videos\\7 disgusted\\trainspotting.mp4,videos\\7 disgusted\\trainspotting.mp4,-1,19', 'videos\\8 angry\\1 animals\\angrydogs.mp4,videos\\8 angry\\1 animals\\angrydogs.mp4,-1,22']
Event 1: videos\1 excited\1 motorsports\kenMiles.mp4,videos\1 excited\1 motorsports\kenMiles.mp4,-1,1, Time Period: 95.375 - 95.5078125 seconds
Event 2: rating,rating,-1,2, Time Period: 163.7421875 - 163.8359375 seconds
Event 3: neutralVideo,neutralVideo,-1,3, Time Period: 173.28125 - 173.328125 seconds
Event 4: videos\2 happy\scooby.mp4,videos\2 happy\scooby.mp4,-1,4, Time Period: 238.6484375 - 238.7890625 seconds
Event 5: rating,rating,-1,5, Time Period: 298.859375 - 298.9765625 seconds
Event 6: neutralVideo,neutralVideo,-1,6, Time Period: 305.2265625 - 305.28125 seconds
Event 7: videos\3 motivated\motivationalAuthor.mp4,videos\3 motivated\motivationalAuthor.mp4,-1,7, Time Period: 370.578125 - 370.7265625 seconds
Event 8: rating,rating,-1,8, Time Period: 438.65625 - 438.78125 seconds
Event 9: neutralVideo,neutralVideo,-1,9, Time Period: 445.3203125 - 445.3828125 seconds
Event 10: videos\4 relaxed\2 nature\waterfall.mp4,videos\4 relaxed\2 nature\waterfall.mp4,-1,10, Time Period: 510.546875 - 510.703125 seconds
Event 11: rating,rating,-1,11, Time Period: 571.09375 - 571.1640625 seconds
Event 12: neutralVideo,neutralVideo,-1,12, Time Period: 577.484375 - 577.4921875 seconds
Event 13: videos\5 sad\1 animals\saddogs.mp4,videos\5 sad\1 animals\saddogs.mp4,-1,13, Time Period: 642.625 - 642.7890625 seconds
Event 14: rating,rating,-1,14, Time Period: 705.6484375 - 705.7265625 seconds
Event 15: neutralVideo,neutralVideo,-1,15, Time Period: 713.828125 - 713.84375 seconds
Event 16: videos\6 horror\conjuring.mp4,videos\6 horror\conjuring.mp4,-1,16, Time Period: 778.9921875 - 779.1640625 seconds
Event 17: rating,rating,-1,17, Time Period: 843.9140625 - 844.0 seconds
Event 18: neutralVideo,neutralVideo,-1,18, Time Period: 851.984375 - 852.0078125 seconds
Event 19: videos\7 disgusted\trainspotting.mp4,videos\7 disgusted\trainspotting.mp4,-1,19, Time Period: 917.2421875 - 917.421875 seconds
Event 20: rating,rating,-1,20, Time Period: 977.296875 - 977.3984375 seconds
Event 21: neutralVideo,neutralVideo,-1,21, Time Period: 982.5234375 - 982.5546875 seconds
Event 22: videos\8 angry\1 animals\angrydogs.mp4,videos\8 angry\1 animals\angrydogs.mp4,-1,22, Time Period: 1047.8046875 - 1047.9921875 seconds
Event 23: rating,rating,-1,23, Time Period: 1113.1796875 - 1113.2890625 seconds
Event 24: neutralVideo,neutralVideo,-1,24, Time Period: 1119.078125 - 1119.1171875 seconds
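The lookup `list(event_dict.keys())[list(event_dict.values()).index(event[2])]` scans the whole dict for every event. Since the values in MNE's event_id dicts are unique integer codes, inverting the dict once gives the same result with constant-time lookups. A sketch with two entries taken from the dict printed earlier:

```python
event_dict = {
    'rating,rating,-1,2': 10,
    'neutralVideo,neutralVideo,-1,3': 4,
}
# Invert once: event code -> description
code_to_desc = {code: desc for desc, code in event_dict.items()}
print(code_to_desc[10])  # rating,rating,-1,2
```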
In [ ]:
import mne
import numpy as np

# Read your EDF file
raw = mne.io.read_raw_edf(r'EDF+\103918\103918.edf', preload=True)
events, event_dict = mne.events_from_annotations(raw)

# Get time periods for events with "videos\" in their descriptions
sfreq = raw.info['sfreq']  # Get the sampling frequency
time_periods = []
video_events = []
for event_idx, event in enumerate(events):
    onset_sample = event[0]
    duration_seconds = event[2] / sfreq  # event[2] is the event code, not a duration in samples
    event_description = list(event_dict.keys())[list(event_dict.values()).index(event[2])]
    if "videos\\" in event_description:
        time_period = (raw.times[onset_sample], raw.times[onset_sample] + duration_seconds)
        time_periods.append(time_period)
        video_events.append((event_description, time_period))

# Print video events with their corresponding time periods
for event_idx, (event_description, time_period) in enumerate(video_events):
    print(f"Event {event_idx + 1}: {event_description}, Time Period: {time_period[0]} - {time_period[1]} seconds")
Extracting EDF parameters from e:\WEBSITE\EDF_analysis\EDF+\103918\103918.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 143999  =      0.000 ...  1124.992 secs...
Used Annotations descriptions: ['neutralVideo,neutralVideo,-1,12', 'neutralVideo,neutralVideo,-1,15', 'neutralVideo,neutralVideo,-1,18', 'neutralVideo,neutralVideo,-1,21', 'neutralVideo,neutralVideo,-1,24', 'neutralVideo,neutralVideo,-1,3', 'neutralVideo,neutralVideo,-1,6', 'neutralVideo,neutralVideo,-1,9', 'rating,rating,-1,11', 'rating,rating,-1,14', 'rating,rating,-1,17', 'rating,rating,-1,2', 'rating,rating,-1,20', 'rating,rating,-1,23', 'rating,rating,-1,5', 'rating,rating,-1,8', 'videos\\1 excited\\1 motorsports\\kenMiles.mp4,videos\\1 excited\\1 motorsports\\kenMiles.mp4,-1,1', 'videos\\2 happy\\scooby.mp4,videos\\2 happy\\scooby.mp4,-1,4', 'videos\\3 motivated\\motivationalAuthor.mp4,videos\\3 motivated\\motivationalAuthor.mp4,-1,7', 'videos\\4 relaxed\\2 nature\\waterfall.mp4,videos\\4 relaxed\\2 nature\\waterfall.mp4,-1,10', 'videos\\5 sad\\1 animals\\saddogs.mp4,videos\\5 sad\\1 animals\\saddogs.mp4,-1,13', 'videos\\6 horror\\conjuring.mp4,videos\\6 horror\\conjuring.mp4,-1,16', 'videos\\7 disgusted\\trainspotting.mp4,videos\\7 disgusted\\trainspotting.mp4,-1,19', 'videos\\8 angry\\1 animals\\angrydogs.mp4,videos\\8 angry\\1 animals\\angrydogs.mp4,-1,22']
Event 1: videos\1 excited\1 motorsports\kenMiles.mp4,videos\1 excited\1 motorsports\kenMiles.mp4,-1,1, Time Period: 95.375 - 95.5078125 seconds
Event 2: videos\2 happy\scooby.mp4,videos\2 happy\scooby.mp4,-1,4, Time Period: 238.6484375 - 238.7890625 seconds
Event 3: videos\3 motivated\motivationalAuthor.mp4,videos\3 motivated\motivationalAuthor.mp4,-1,7, Time Period: 370.578125 - 370.7265625 seconds
Event 4: videos\4 relaxed\2 nature\waterfall.mp4,videos\4 relaxed\2 nature\waterfall.mp4,-1,10, Time Period: 510.546875 - 510.703125 seconds
Event 5: videos\5 sad\1 animals\saddogs.mp4,videos\5 sad\1 animals\saddogs.mp4,-1,13, Time Period: 642.625 - 642.7890625 seconds
Event 6: videos\6 horror\conjuring.mp4,videos\6 horror\conjuring.mp4,-1,16, Time Period: 778.9921875 - 779.1640625 seconds
Event 7: videos\7 disgusted\trainspotting.mp4,videos\7 disgusted\trainspotting.mp4,-1,19, Time Period: 917.2421875 - 917.421875 seconds
Event 8: videos\8 angry\1 animals\angrydogs.mp4,videos\8 angry\1 animals\angrydogs.mp4,-1,22, Time Period: 1047.8046875 - 1047.9921875 seconds
In [ ]:
import mne
import numpy as np

# Read your EDF file
raw = mne.io.read_raw_edf(r'EDF+\103918\103918.edf', preload=True)
events, event_dict = mne.events_from_annotations(raw)

# Define baseline period (0 to 30 seconds)
baseline_start = 0
baseline_end = 30

# Get time periods for events with "videos\" in their descriptions, skipping the first 30 seconds
sfreq = raw.info['sfreq']  # Get the sampling frequency
time_periods = []
video_events = []
for event_idx, event in enumerate(events):
    onset_sample = event[0]
    event_description = list(event_dict.keys())[list(event_dict.values()).index(event[2])]
    if "videos\\" in event_description:
        start_time = raw.times[onset_sample] + 30  # Skip the first 30 seconds
        end_time = start_time + 30  # Grab the subsequent 30 seconds
        time_period = (start_time, end_time)
        time_periods.append(time_period)
        video_events.append((event_description, time_period))

# Extract baseline period
baseline_data, _ = raw[:, int(baseline_start * sfreq):int(baseline_end * sfreq)]

# Print video events with their corresponding time periods
for event_idx, (event_description, time_period) in enumerate(video_events):
    print(f"Event {event_idx + 1}: {event_description}, Time Period: {time_period[0]} - {time_period[1]} seconds")

# Now you can use 'baseline_data' as your baseline period
Extracting EDF parameters from e:\WEBSITE\EDF_analysis\EDF+\103918\103918.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 143999  =      0.000 ...  1124.992 secs...
Used Annotations descriptions: ['neutralVideo,neutralVideo,-1,12', 'neutralVideo,neutralVideo,-1,15', 'neutralVideo,neutralVideo,-1,18', 'neutralVideo,neutralVideo,-1,21', 'neutralVideo,neutralVideo,-1,24', 'neutralVideo,neutralVideo,-1,3', 'neutralVideo,neutralVideo,-1,6', 'neutralVideo,neutralVideo,-1,9', 'rating,rating,-1,11', 'rating,rating,-1,14', 'rating,rating,-1,17', 'rating,rating,-1,2', 'rating,rating,-1,20', 'rating,rating,-1,23', 'rating,rating,-1,5', 'rating,rating,-1,8', 'videos\\1 excited\\1 motorsports\\kenMiles.mp4,videos\\1 excited\\1 motorsports\\kenMiles.mp4,-1,1', 'videos\\2 happy\\scooby.mp4,videos\\2 happy\\scooby.mp4,-1,4', 'videos\\3 motivated\\motivationalAuthor.mp4,videos\\3 motivated\\motivationalAuthor.mp4,-1,7', 'videos\\4 relaxed\\2 nature\\waterfall.mp4,videos\\4 relaxed\\2 nature\\waterfall.mp4,-1,10', 'videos\\5 sad\\1 animals\\saddogs.mp4,videos\\5 sad\\1 animals\\saddogs.mp4,-1,13', 'videos\\6 horror\\conjuring.mp4,videos\\6 horror\\conjuring.mp4,-1,16', 'videos\\7 disgusted\\trainspotting.mp4,videos\\7 disgusted\\trainspotting.mp4,-1,19', 'videos\\8 angry\\1 animals\\angrydogs.mp4,videos\\8 angry\\1 animals\\angrydogs.mp4,-1,22']
Event 1: videos\1 excited\1 motorsports\kenMiles.mp4,videos\1 excited\1 motorsports\kenMiles.mp4,-1,1, Time Period: 125.375 - 155.375 seconds
Event 2: videos\2 happy\scooby.mp4,videos\2 happy\scooby.mp4,-1,4, Time Period: 268.6484375 - 298.6484375 seconds
Event 3: videos\3 motivated\motivationalAuthor.mp4,videos\3 motivated\motivationalAuthor.mp4,-1,7, Time Period: 400.578125 - 430.578125 seconds
Event 4: videos\4 relaxed\2 nature\waterfall.mp4,videos\4 relaxed\2 nature\waterfall.mp4,-1,10, Time Period: 540.546875 - 570.546875 seconds
Event 5: videos\5 sad\1 animals\saddogs.mp4,videos\5 sad\1 animals\saddogs.mp4,-1,13, Time Period: 672.625 - 702.625 seconds
Event 6: videos\6 horror\conjuring.mp4,videos\6 horror\conjuring.mp4,-1,16, Time Period: 808.9921875 - 838.9921875 seconds
Event 7: videos\7 disgusted\trainspotting.mp4,videos\7 disgusted\trainspotting.mp4,-1,19, Time Period: 947.2421875 - 977.2421875 seconds
Event 8: videos\8 angry\1 animals\angrydogs.mp4,videos\8 angry\1 animals\angrydogs.mp4,-1,22, Time Period: 1077.8046875 - 1107.8046875 seconds
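The same 30-second windows can be cut directly out of the channel data by converting seconds to sample indices. A NumPy-only sketch (the zero array stands in for `raw.get_data()`; the 128 Hz rate and the 125.375 s onset of Event 1 come from the printout above, and `slice_window` is our own helper name):

```python
import numpy as np

sfreq = 128.0  # assumed sampling rate of the recording above
data = np.zeros((32, int(1125 * sfreq)))  # stand-in for raw.get_data(): (channels, samples)

def slice_window(data, sfreq, start_s, dur_s=30.0):
    """Return the (channels, samples) slice covering [start_s, start_s + dur_s)."""
    i0 = int(round(start_s * sfreq))
    return data[:, i0:i0 + int(round(dur_s * sfreq))]

seg = slice_window(data, sfreq, 125.375)  # Event 1's 30 s window
print(seg.shape)  # (32, 3840)
```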
In [ ]:
import mne
import numpy as np

# Read the EDF file whose annotations we want
raw = mne.io.read_raw_edf(r'EDF+\Emotion Recognition\103918\103918_2024-04-17T100524.975995_EPOCFLEX_182887_2024.04.17T10.05.24.04.00.md.edf', preload=True)
events, event_dict = mne.events_from_annotations(raw)
eeg_channels = ['Cz', 'Fz', 'Fp1', 'F7', 'F3', 'FC1', 'C3', 'FC5', 'FT9', 'T7', 'CP5', 'CP1', 'P3', 'P7', 'PO9', 'O1', 'Pz', 'Oz', 'O2', 'PO10', 'P8', 'P4', 'CP2', 'CP6', 'T8', 'FT10', 'FC6', 'C4', 'FC2', 'F4', 'F8', 'Fp2']

# Read a second EDF file for preprocessing; note that `events` above still
# refers to the first recording's annotations
raw = mne.io.read_raw_edf(r'e:\EEG\Archive\DATA2OTHERFORMATS\zachary_edf+.edf', preload=True, verbose=True, infer_types=True, include=eeg_channels)
montage = mne.channels.make_standard_montage('standard_1020')
raw.set_montage(montage, on_missing='ignore')

# Apply preprocessing steps
raw.set_eeg_reference('average', projection=True)  # set EEG average reference
raw.filter(l_freq=0.5, h_freq=40)  # Apply band-pass filter
# Define baseline period (0 to 30 seconds)
baseline_start = 0
baseline_end = 30

# Get time periods for events with "videos\" in their descriptions, skipping the first 30 seconds
sfreq = raw.info['sfreq']  # Get the sampling frequency
time_periods = []
video_events = []
for event_idx, event in enumerate(events):
    onset_sample = event[0]
    event_description = list(event_dict.keys())[list(event_dict.values()).index(event[2])]
    if "videos\\" in event_description:
        start_time = raw.times[onset_sample] + 30  # Skip the first 30 seconds
        end_time = start_time + 30  # Grab the subsequent 30 seconds
        time_period = (start_time, end_time)
        time_periods.append(time_period)
        video_events.append((event_description, time_period))

# Extract baseline period
baseline_data, _ = raw[:, int(baseline_start * sfreq):int(baseline_end * sfreq)]

# Print video events with their corresponding time periods
for event_idx, (event_description, time_period) in enumerate(video_events):
    print(f"Event {event_idx + 1}: {event_description}, Time Period: {time_period[0]} - {time_period[1]} seconds")

# Now you can use 'baseline_data' as your baseline period

# Apply ICA to the raw EEG data to remove artifacts
from mne.preprocessing import ICA

# Set up ICA
ica = ICA(n_components=20, random_state=97, max_iter=800)

# Fit ICA on the raw EEG data
ica.fit(raw)

# Plot ICA components to manually inspect and identify components that capture eye blinks
ica.plot_components()

# Use built-in methods to automatically find EOG-like components (since we don't have EOG data, we use EEG channels)
eog_indices, eog_scores = ica.find_bads_eog(raw, ch_name='Fp1')  # assuming Fp1 is most likely to capture eye blinks
ica.plot_scores(eog_scores)

# Exclude the identified EOG-related components
ica.exclude = eog_indices

# Apply the ICA solution to the raw data to remove the artifacts
raw_clean = ica.apply(raw.copy())
print('Now printing raw graph')
raw.plot()
# Plot the cleaned data to verify the result
print('Now printing cleaned graph')
raw_clean.plot()


# If needed, you can now proceed with your analysis on 'raw_clean'
Extracting EDF parameters from e:\EEG\Archive\DATA2OTHERFORMATS\EDF+\Emotion Recognition\103918\103918_2024-04-17T100524.975995_EPOCFLEX_182887_2024.04.17T10.05.24.04.00.md.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 143999  =      0.000 ...  1124.992 secs...
Used Annotations descriptions: ['neutralVideo,neutralVideo,-1,12', 'neutralVideo,neutralVideo,-1,15', 'neutralVideo,neutralVideo,-1,18', 'neutralVideo,neutralVideo,-1,21', 'neutralVideo,neutralVideo,-1,24', 'neutralVideo,neutralVideo,-1,3', 'neutralVideo,neutralVideo,-1,6', 'neutralVideo,neutralVideo,-1,9', 'rating,rating,-1,11', 'rating,rating,-1,14', 'rating,rating,-1,17', 'rating,rating,-1,2', 'rating,rating,-1,20', 'rating,rating,-1,23', 'rating,rating,-1,5', 'rating,rating,-1,8', 'videos\\1 excited\\1 motorsports\\kenMiles.mp4,videos\\1 excited\\1 motorsports\\kenMiles.mp4,-1,1', 'videos\\2 happy\\scooby.mp4,videos\\2 happy\\scooby.mp4,-1,4', 'videos\\3 motivated\\motivationalAuthor.mp4,videos\\3 motivated\\motivationalAuthor.mp4,-1,7', 'videos\\4 relaxed\\2 nature\\waterfall.mp4,videos\\4 relaxed\\2 nature\\waterfall.mp4,-1,10', 'videos\\5 sad\\1 animals\\saddogs.mp4,videos\\5 sad\\1 animals\\saddogs.mp4,-1,13', 'videos\\6 horror\\conjuring.mp4,videos\\6 horror\\conjuring.mp4,-1,16', 'videos\\7 disgusted\\trainspotting.mp4,videos\\7 disgusted\\trainspotting.mp4,-1,19', 'videos\\8 angry\\1 animals\\angrydogs.mp4,videos\\8 angry\\1 animals\\angrydogs.mp4,-1,22']
Extracting EDF parameters from e:\EEG\Archive\DATA2OTHERFORMATS\zachary_edf+.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 139775  =      0.000 ...  1091.992 secs...
EEG channel type selected for re-referencing
Adding average EEG reference projection.
1 projection items deactivated
Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it.
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 0.5 - 40 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 0.50
- Lower transition bandwidth: 0.50 Hz (-6 dB cutoff frequency: 0.25 Hz)
- Upper passband edge: 40.00 Hz
- Upper transition bandwidth: 10.00 Hz (-6 dB cutoff frequency: 45.00 Hz)
- Filter length: 845 samples (6.602 s)

Event 1: videos\1 excited\1 motorsports\kenMiles.mp4,videos\1 excited\1 motorsports\kenMiles.mp4,-1,1, Time Period: 125.375 - 155.375 seconds
Event 2: videos\2 happy\scooby.mp4,videos\2 happy\scooby.mp4,-1,4, Time Period: 268.6484375 - 298.6484375 seconds
Event 3: videos\3 motivated\motivationalAuthor.mp4,videos\3 motivated\motivationalAuthor.mp4,-1,7, Time Period: 400.578125 - 430.578125 seconds
Event 4: videos\4 relaxed\2 nature\waterfall.mp4,videos\4 relaxed\2 nature\waterfall.mp4,-1,10, Time Period: 540.546875 - 570.546875 seconds
Event 5: videos\5 sad\1 animals\saddogs.mp4,videos\5 sad\1 animals\saddogs.mp4,-1,13, Time Period: 672.625 - 702.625 seconds
Event 6: videos\6 horror\conjuring.mp4,videos\6 horror\conjuring.mp4,-1,16, Time Period: 808.9921875 - 838.9921875 seconds
Event 7: videos\7 disgusted\trainspotting.mp4,videos\7 disgusted\trainspotting.mp4,-1,19, Time Period: 947.2421875 - 977.2421875 seconds
Event 8: videos\8 angry\1 animals\angrydogs.mp4,videos\8 angry\1 animals\angrydogs.mp4,-1,22, Time Period: 1077.8046875 - 1107.8046875 seconds
Fitting ICA to data using 32 channels (please be patient, this may take a while)
[Parallel(n_jobs=1)]: Done  17 tasks      | elapsed:    0.0s
Selecting by number: 20 components
Fitting ICA took 3.3s.
[figure: ICA component topographies]
Using EOG channel: Fp1
... filtering ICA sources
Setting up band-pass filter from 1 - 10 Hz

FIR filter parameters
---------------------
Designing a two-pass forward and reverse, zero-phase, non-causal bandpass filter:
- Windowed frequency-domain design (firwin2) method
- Hann window
- Lower passband edge: 1.00
- Lower transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 0.75 Hz)
- Upper passband edge: 10.00 Hz
- Upper transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 10.25 Hz)
- Filter length: 1280 samples (10.000 s)

... filtering target
Setting up band-pass filter from 1 - 10 Hz

FIR filter parameters
---------------------
Designing a two-pass forward and reverse, zero-phase, non-causal bandpass filter:
- Windowed frequency-domain design (firwin2) method
- Hann window
- Lower passband edge: 1.00
- Lower transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 0.75 Hz)
- Upper passband edge: 10.00 Hz
- Upper transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 10.25 Hz)
- Filter length: 1280 samples (10.000 s)

[Parallel(n_jobs=1)]: Done  17 tasks      | elapsed:    0.0s
[figure: EOG scores per ICA component]
Applying ICA to Raw instance
    Transforming to ICA space (20 components)
    Zeroing out 2 ICA components
    Projecting back using 32 PCA components
Now printing cleaned graph
[figure output]
Now printing raw graph
[figure output]
Out[ ]:
[figure output]
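Numerically, "Zeroing out 2 ICA components" in the log above means: unmix the data into source space, zero the excluded rows, and mix back. A toy NumPy sketch of that core algebra (MNE's real implementation also involves pre-whitening and PCA, so this is only the idea, not the library's code):

```python
import numpy as np

rng = np.random.RandomState(97)
n_comp, n_samples = 4, 1000
mixing = rng.randn(n_comp, n_comp)      # toy mixing matrix
sources = rng.randn(n_comp, n_samples)  # toy independent sources
data = mixing @ sources                 # "recorded" channel data

unmixing = np.linalg.inv(mixing)
est = unmixing @ data                   # transform to source space
est[0, :] = 0.0                         # exclude component 0 (like ica.exclude = [0])
cleaned = mixing @ est                  # project back without the artifact

# cleaned equals the original data minus component 0's contribution
print(np.allclose(cleaned, data - np.outer(mixing[:, 0], sources[0])))  # True
```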
In [ ]:
for i in dir(ica):
    if '__' in i: # skip dunder attributes like __init__
        continue
    print(i)
_apply_epochs
_apply_evoked
_apply_raw
_check_exclude
_check_n_pca_components
_check_target
_compute_pre_whitener
_do_proj
_export_info
_find_bads_ch
_fit
_fit_epochs
_fit_raw
_get_ctps_threshold
_get_explained_variance_ratio_one_ch_type
_get_infos_for_repr
_get_picks
_ica_names
_max_pca_components
_pick_sources
_pre_whiten
_repr_html_
_reset
_sources_as_epochs
_sources_as_evoked
_sources_as_raw
_transform
_transform_epochs
_transform_evoked
_transform_raw
_update_ica_names
_update_mixing_matrix
allow_ref_meg
apply
ch_names
compensation_grade
copy
current_fit
exclude
find_bads_ecg
find_bads_eog
find_bads_muscle
find_bads_ref
fit
fit_params
get_channel_types
get_components
get_explained_variance_ratio
get_sources
info
labels_
max_iter
method
mixing_matrix_
n_components
n_components_
n_iter_
n_pca_components
n_samples_
noise_cov
pca_components_
pca_explained_variance_
pca_mean_
plot_components
plot_overlay
plot_properties
plot_scores
plot_sources
pre_whitener_
random_state
reject_
save
score_sources
unmixing_matrix_

Let's try find_bads_eog¶

Remaining methods to try:¶

find_bads_ecg¶

find_bads_muscle¶

find_bads_ref¶

In [ ]:
import mne
from mne.preprocessing import ICA

# Read your EDF file
raw = mne.io.read_raw_edf(r'EDF+\Zacker\Zacker.edf', preload=True, verbose=True, infer_types=True)
eeg_channels = ['Cz', 'Fz', 'Fp1', 'F7', 'F3', 'FC1', 'C3', 'FC5', 'FT9', 'T7', 'CP5', 'CP1', 'P3', 'P7', 'PO9', 'O1', 'Pz', 'Oz', 'O2', 'PO10', 'P8', 'P4', 'CP2', 'CP6', 'T8', 'FT10', 'FC6', 'C4', 'FC2', 'F4', 'F8', 'Fp2']

# Select only EEG channels (pick_channels() is legacy; newer MNE prefers raw.pick(eeg_channels))
raw.pick_channels(eeg_channels)

# Set montage
montage = mne.channels.make_standard_montage('standard_1020')
raw.set_montage(montage, on_missing='ignore')

# Apply preprocessing steps
raw.set_eeg_reference('average', projection=True)  # set EEG average reference
raw.filter(l_freq=0.5, h_freq=40)  # Apply band-pass filter

# Set up ICA (ICA was imported at the top of this cell)
ica = ICA(n_components=20, random_state=97, max_iter=800)

# Fit ICA on the raw EEG data
ica.fit(raw)

# Plot ICA components to manually inspect and identify components that capture eye blinks
ica.plot_components()

# Use built-in methods to automatically find EOG-like components (since we don't have EOG data, we use EEG channels)
eog_indices, eog_scores = ica.find_bads_eog(raw, ch_name='Fp1')  # assuming Fp1 is most likely to capture eye blinks
ica.plot_scores(eog_scores)

# Exclude the identified EOG-related components
ica.exclude = eog_indices

# Apply the ICA solution to the raw data to remove the artifacts
raw_clean = ica.apply(raw.copy())

# Plot the raw and cleaned data for comparison using MNE's plotting functions
print('Raw file')
raw.plot(title='Raw EEG Data')
print('Cleaned file')
raw_clean.plot(title='Cleaned EEG Data')
Extracting EDF parameters from e:\WEBSITE\EDF_analysis\EDF+\Zacker\Zacker.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 139775  =      0.000 ...  1091.992 secs...
NOTE: pick_channels() is a legacy function. New code should use inst.pick(...).
EEG channel type selected for re-referencing
Adding average EEG reference projection.
1 projection items deactivated
Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it.
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 0.5 - 40 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 0.50
- Lower transition bandwidth: 0.50 Hz (-6 dB cutoff frequency: 0.25 Hz)
- Upper passband edge: 40.00 Hz
- Upper transition bandwidth: 10.00 Hz (-6 dB cutoff frequency: 45.00 Hz)
- Filter length: 845 samples (6.602 s)

Fitting ICA to data using 32 channels (please be patient, this may take a while)
[Parallel(n_jobs=1)]: Done  17 tasks      | elapsed:    0.0s
Selecting by number: 20 components
Fitting ICA took 3.7s.
[figure: ICA component topographies]
Using EOG channel: Fp1
... filtering ICA sources
Setting up band-pass filter from 1 - 10 Hz

FIR filter parameters
---------------------
Designing a two-pass forward and reverse, zero-phase, non-causal bandpass filter:
- Windowed frequency-domain design (firwin2) method
- Hann window
- Lower passband edge: 1.00
- Lower transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 0.75 Hz)
- Upper passband edge: 10.00 Hz
- Upper transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 10.25 Hz)
- Filter length: 1280 samples (10.000 s)

... filtering target
Setting up band-pass filter from 1 - 10 Hz

FIR filter parameters
---------------------
Designing a two-pass forward and reverse, zero-phase, non-causal bandpass filter:
- Windowed frequency-domain design (firwin2) method
- Hann window
- Lower passband edge: 1.00
- Lower transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 0.75 Hz)
- Upper passband edge: 10.00 Hz
- Upper transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 10.25 Hz)
- Filter length: 1280 samples (10.000 s)

[Parallel(n_jobs=1)]: Done  17 tasks      | elapsed:    0.0s
[figure: EOG component score plot]
Applying ICA to Raw instance
    Transforming to ICA space (20 components)
    Zeroing out 2 ICA components
    Projecting back using 32 PCA components
Raw file
[figure: raw EEG traces]
Cleaned file
[figure: cleaned EEG traces]
Out[ ]:
[figure]
In [ ]:
# Let's try find_bads_muscle
# Remaining methods to try:
# find_bads_ecg
# find_bads_ref

import mne

# Read your EDF file
raw = mne.io.read_raw_edf(r'EDF+\Zacker\Zacker.edf', preload=True, verbose=True, infer_types=True)
eeg_channels = ['Cz', 'Fz', 'Fp1', 'F7', 'F3', 'FC1', 'C3', 'FC5', 'FT9', 'T7', 'CP5', 'CP1', 'P3', 'P7', 'PO9', 'O1', 'Pz', 'Oz', 'O2', 'PO10', 'P8', 'P4', 'CP2', 'CP6', 'T8', 'FT10', 'FC6', 'C4', 'FC2', 'F4', 'F8', 'Fp2']

# Select only EEG channels
raw.pick_channels(eeg_channels)

# Set montage
montage = mne.channels.make_standard_montage('standard_1020')
raw.set_montage(montage, on_missing='ignore')

# Apply preprocessing steps
raw.set_eeg_reference('average', projection=True)  # set EEG average reference
raw.filter(l_freq=0.5, h_freq=40)  # Apply band-pass filter

# Apply ICA to the raw EEG data to remove artifacts
from mne.preprocessing import ICA

# Set up ICA
ica = ICA(n_components=32, random_state=97, max_iter=800)

# Fit ICA on the raw EEG data
ica.fit(raw)

# Plot ICA components for manual inspection
ica.plot_components()

# Automatically flag components that look like muscle artifacts
# (find_bads_muscle needs no reference channel)
muscle_indices, muscle_scores = ica.find_bads_muscle(raw)
ica.plot_scores(muscle_scores)

# Exclude the identified muscle-related components
ica.exclude = muscle_indices

# Apply the ICA solution to the raw data to remove the artifacts
raw_clean = ica.apply(raw.copy())

# Plot the raw and cleaned data for comparison using MNE's plotting functions
print('Raw file')
raw.plot(title='Raw EEG Data')
print('Cleaned file')
raw_clean.plot(title='Cleaned EEG Data')
Extracting EDF parameters from e:\WEBSITE\EDF_analysis\EDF+\Zacker\Zacker.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 139775  =      0.000 ...  1091.992 secs...
NOTE: pick_channels() is a legacy function. New code should use inst.pick(...).
EEG channel type selected for re-referencing
Adding average EEG reference projection.
1 projection items deactivated
Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it.
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 0.5 - 40 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 0.50
- Lower transition bandwidth: 0.50 Hz (-6 dB cutoff frequency: 0.25 Hz)
- Upper passband edge: 40.00 Hz
- Upper transition bandwidth: 10.00 Hz (-6 dB cutoff frequency: 45.00 Hz)
- Filter length: 845 samples (6.602 s)

Fitting ICA to data using 32 channels (please be patient, this may take a while)
[Parallel(n_jobs=1)]: Done  17 tasks      | elapsed:    0.0s
Selecting by number: 32 components
Fitting ICA took 8.2s.
[figures: ICA component topographies]
Effective window size : 16.000 (s)
[figure: muscle component score plot]
Applying ICA to Raw instance
    Transforming to ICA space (32 components)
    Zeroing out 21 ICA components
    Projecting back using 32 PCA components
Raw file
[figure: raw EEG traces]
Cleaned file
[figure: cleaned EEG traces]
Out[ ]:
[figure]

find_bads_ref fails here because it requires both MEG and MEG reference channels in the ICA solution; our EEG-only recording has neither, so this method needs further investigation¶

In [ ]:
# Let's try find_bads_ref
# Remaining methods to try:
# find_bads_ecg

import mne
from mne.preprocessing import ICA
# Read your EDF file
raw = mne.io.read_raw_edf(r'EDF+\Zacker\Zacker.edf', preload=True, verbose=True, infer_types=True)
#raw = mne.io.read_raw_edf(r'e:\EEG\Archive\DATA2OTHERFORMATS\zachary_edf+.edf', preload=True, verbose=True, infer_types=True)
eeg_channels = ['Cz', 'Fz', 'Fp1', 'F7', 'F3', 'FC1', 'C3', 'FC5', 'FT9', 'T7', 'CP5', 'CP1', 'P3', 'P7', 'PO9', 'O1', 'Pz', 'Oz', 'O2', 'PO10', 'P8', 'P4', 'CP2', 'CP6', 'T8', 'FT10', 'FC6', 'C4', 'FC2', 'F4', 'F8', 'Fp2']

# Select only EEG channels
raw.pick_channels(eeg_channels)

# Set montage
montage = mne.channels.make_standard_montage('standard_1020')
raw.set_montage(montage, on_missing='ignore')

# Apply preprocessing steps
raw.set_eeg_reference('average', projection=True)  # set EEG average reference
raw.filter(l_freq=0.5, h_freq=40)  # Apply band-pass filter

# Apply ICA to the raw EEG data to remove artifacts
# Set up ICA
ica = ICA(n_components=20, random_state=97, max_iter=800)

# Fit ICA on the raw EEG data
ica.fit(raw)

# Plot ICA components for manual inspection
ica.plot_components()

# find_bads_ref targets MEG reference noise; expect failure on EEG-only data
ref_indices, ref_scores = ica.find_bads_ref(raw, ch_name='Fp1')
ica.plot_scores(ref_scores)

# Exclude the identified reference-related components
ica.exclude = ref_indices

# Apply the ICA solution to the raw data to remove the artifacts
raw_clean = ica.apply(raw.copy())

# Plot the raw and cleaned data for comparison using MNE's plotting functions
raw.plot(title='Raw EEG Data')
raw_clean.plot(title='Cleaned EEG Data')
Extracting EDF parameters from e:\WEBSITE\EDF_analysis\EDF+\Zacker\Zacker.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 139775  =      0.000 ...  1091.992 secs...
NOTE: pick_channels() is a legacy function. New code should use inst.pick(...).
EEG channel type selected for re-referencing
Adding average EEG reference projection.
1 projection items deactivated
Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it.
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 0.5 - 40 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 0.50
- Lower transition bandwidth: 0.50 Hz (-6 dB cutoff frequency: 0.25 Hz)
- Upper passband edge: 40.00 Hz
- Upper transition bandwidth: 10.00 Hz (-6 dB cutoff frequency: 45.00 Hz)
- Filter length: 845 samples (6.602 s)

Fitting ICA to data using 32 channels (please be patient, this may take a while)
[Parallel(n_jobs=1)]: Done  17 tasks      | elapsed:    0.0s
Selecting by number: 20 components
Fitting ICA took 3.2s.
[figure: ICA component topographies]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[13], line 34
     31 ica.plot_components()
     33 # find_bads_ref targets MEG reference noise; expect failure on EEG-only data
---> 34 ref_indices, ref_scores = ica.find_bads_ref(raw, ch_name='Fp1')
     35 ica.plot_scores(ref_scores)
     37 # Exclude the identified reference-related components

File <decorator-gen-349>:12, in find_bads_ref(self, inst, ch_name, threshold, start, stop, l_freq, h_freq, reject_by_annotation, method, measure, verbose)

File c:\Users\micha\AppData\Local\Programs\Python\Python312\Lib\site-packages\mne\preprocessing\ica.py:1915, in ICA.find_bads_ref(self, inst, ch_name, threshold, start, stop, l_freq, h_freq, reject_by_annotation, method, measure, verbose)
   1913 ref_picks = pick_types(self.info, meg=False, ref_meg=True)
   1914 if not any(meg_picks) or not any(ref_picks):
-> 1915     raise ValueError(
   1916         "ICA solution must contain both reference and" " MEG channels."
   1917     )
   1918 weights = self.get_components()
   1919 # take norm of component weights on reference channels for each
   1920 # component, divide them by the norm on the standard channels,
   1921 # log transform to approximate normal distribution

ValueError: ICA solution must contain both reference and MEG channels.

ECG artifacts seem to be the least prevalent artifact type in these EEG recordings¶
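Since Fp1 is only a surrogate ECG channel, one cheap sanity check on the detection in the cell below is whether the reported average pulse (56 bpm here) is physiologically plausible before trusting any exclusions derived from it. A sketch — the helper name and bounds are our assumptions, not an MNE convention:

```python
# Sanity-check a detected average pulse before trusting ICA exclusions
# derived from a surrogate ECG channel. The bounds are a rough
# resting-adult range, not an MNE convention.
def pulse_is_plausible(bpm, low=40.0, high=120.0):
    return low <= bpm <= high

print(pulse_is_plausible(56))  # the rate reported in the log below; prints True
```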

In [ ]:
# Let's try find_bads_ecg

import mne
from mne.preprocessing import ICA
# Read your EDF file
raw = mne.io.read_raw_edf(r'EDF+\Zacker\Zacker.edf', preload=True, verbose=True, infer_types=True)
#raw = mne.io.read_raw_edf(r'e:\EEG\Archive\DATA2OTHERFORMATS\zachary_edf+.edf', preload=True, verbose=True, infer_types=True)
eeg_channels = ['Cz', 'Fz', 'Fp1', 'F7', 'F3', 'FC1', 'C3', 'FC5', 'FT9', 'T7', 'CP5', 'CP1', 'P3', 'P7', 'PO9', 'O1', 'Pz', 'Oz', 'O2', 'PO10', 'P8', 'P4', 'CP2', 'CP6', 'T8', 'FT10', 'FC6', 'C4', 'FC2', 'F4', 'F8', 'Fp2']

# Select only EEG channels
raw.pick_channels(eeg_channels)

# Set montage
montage = mne.channels.make_standard_montage('standard_1020')
raw.set_montage(montage, on_missing='ignore')

# Apply preprocessing steps
raw.set_eeg_reference('average', projection=True)  # set EEG average reference
raw.filter(l_freq=0.5, h_freq=40)  # Apply band-pass filter

# Apply ICA to the raw EEG data to remove artifacts
# Set up ICA
ica = ICA(n_components=20, random_state=97, max_iter=800)

# Fit ICA on the raw EEG data
ica.fit(raw)

# Plot ICA components for manual inspection
ica.plot_components()

# Automatically flag ECG-like components, using Fp1 as a surrogate ECG
# channel (this recording has no dedicated ECG electrode)
ecg_indices, ecg_scores = ica.find_bads_ecg(raw, ch_name='Fp1')
ica.plot_scores(ecg_scores)

# Exclude the identified ECG-related components
ica.exclude = ecg_indices

# Apply the ICA solution to the raw data to remove the artifacts
raw_clean = ica.apply(raw.copy())

# Plot the raw and cleaned data for comparison using MNE's plotting functions
raw.plot(title='Raw EEG Data')
raw_clean.plot(title='Cleaned EEG Data')
Extracting EDF parameters from e:\WEBSITE\EDF_analysis\EDF+\Zacker\Zacker.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 139775  =      0.000 ...  1091.992 secs...
NOTE: pick_channels() is a legacy function. New code should use inst.pick(...).
EEG channel type selected for re-referencing
Adding average EEG reference projection.
1 projection items deactivated
Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it.
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 0.5 - 40 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 0.50
- Lower transition bandwidth: 0.50 Hz (-6 dB cutoff frequency: 0.25 Hz)
- Upper passband edge: 40.00 Hz
- Upper transition bandwidth: 10.00 Hz (-6 dB cutoff frequency: 45.00 Hz)
- Filter length: 845 samples (6.602 s)

Fitting ICA to data using 32 channels (please be patient, this may take a while)
[Parallel(n_jobs=1)]: Done  17 tasks      | elapsed:    0.0s
Selecting by number: 20 components
Fitting ICA took 3.2s.
[figure: ICA component topographies]
Using threshold: 0.44 for CTPS ECG detection
Using channel Fp1 to identify heart beats.
Setting up band-pass filter from 8 - 16 Hz

FIR filter parameters
---------------------
Designing a two-pass forward and reverse, zero-phase, non-causal bandpass filter:
- Windowed frequency-domain design (firwin2) method
- Hann window
- Lower passband edge: 8.00
- Lower transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 7.75 Hz)
- Upper passband edge: 16.00 Hz
- Upper transition bandwidth: 0.50 Hz (-12 dB cutoff frequency: 16.25 Hz)
- Filter length: 1280 samples (10.000 s)

Number of ECG events detected : 1025 (average pulse 56 / min.)
Not setting metadata
1025 matching events found
No baseline correction applied
Created an SSP operator (subspace dimension = 1)
Using data from preloaded Raw for 1025 events and 129 original time points ...
0 bad epochs dropped
[figure: ECG component score plot]
Applying ICA to Raw instance
    Transforming to ICA space (20 components)
    Zeroing out 0 ICA components
    Projecting back using 32 PCA components
[figure: raw EEG traces]
[figure: cleaned EEG traces]
Out[ ]:
[figure]